Collaborating Authors

Sammut, Claude


Human-Centric Goal Reasoning with Ripple-Down Rules

arXiv.org Artificial Intelligence

ActorSim is a goal reasoning framework developed at the Naval Research Laboratory. Originally, all goal reasoning rules were hand-crafted. This work extends ActorSim with the capability of learning by demonstration, that is, when a human trainer disagrees with a decision made by the system, the trainer can take over and show the system the correct decision. The learning component uses Ripple-Down Rules (RDR) to build new decision rules to correctly handle similar cases in the future. The system is demonstrated using the RoboCup Rescue Agent Simulation, which simulates a city-wide disaster, requiring emergency services, including fire, ambulance and police, to be dispatched to different sites to evacuate civilians from dangerous situations. The RDRs are implemented in a scripting language, FrameScript, which is used to mediate between ActorSim and the agent simulator. Using Ripple-Down Rules, ActorSim can scale to an order of magnitude more goals than the previous version.
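To make the learning step concrete, below is a minimal single-classification Ripple-Down Rules tree in Python. This is an illustrative sketch only: the class and method names are assumptions, and the actual rules in this work are written in FrameScript, not Python.

```python
class RDRNode:
    """One rule in a single-classification RDR tree (illustrative sketch)."""

    def __init__(self, condition, conclusion, cornerstone=None):
        self.condition = condition      # predicate: case -> bool
        self.conclusion = conclusion    # decision returned if the rule fires
        self.cornerstone = cornerstone  # the case that motivated this rule
        self.if_true = None             # exception branch
        self.if_false = None            # alternative branch

    def classify(self, case, default=None):
        """Return (conclusion, last node visited) for a case."""
        if self.condition(case):
            if self.if_true:
                return self.if_true.classify(case, self.conclusion)
            return self.conclusion, self
        if self.if_false:
            return self.if_false.classify(case, default)
        return default, self

    def add_correction(self, condition, conclusion, case):
        """Attach a new rule where classification stopped, so the trainer's
        corrected decision handles this case (and similar ones) in future."""
        node = RDRNode(condition, conclusion, cornerstone=case)
        if self.condition(case):
            self.if_true = node
        else:
            self.if_false = node
        return node


# Hypothetical usage: a default rule dispatches an ambulance; a trainer
# correction adds an exception so burning buildings get a fire brigade.
root = RDRNode(lambda c: True, "send_ambulance")
case = {"building_on_fire": True}
_, last = root.classify(case)
last.add_correction(lambda c: c.get("building_on_fire"), "send_fire_brigade", case)
print(root.classify(case)[0])  # -> send_fire_brigade
```

The property this structure gives is incremental, local repair: a correction is attached exactly where the wrong conclusion was reached, so existing rules and the cases they already handle are never disturbed.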


Online Learning and Planning in Cognitive Hierarchies

arXiv.org Artificial Intelligence

Complex robot behaviour typically requires the integration of multiple robotic and Artificial Intelligence (AI) techniques and components. Integrating such disparate components into a coherent system, while also ensuring global properties and behaviours, is a significant challenge for cognitive robotics. Using a formal framework to model the interactions between components can be an important step in dealing with this challenge. In this paper we extend an existing formal framework [Clark et al., 2016] to model complex integrated reasoning behaviours of robotic systems, from symbolic planning through to online learning of policies and transition systems. Furthermore, the new framework allows for more flexible modelling of the interactions between different reasoning components.
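The core abstraction is a hierarchy of reasoning nodes that exchange beliefs and commands. The sketch below illustrates that idea only; the node interface and all names are assumptions for illustration, and the paper's formal model covers far more (notably online learning of policies and transition systems).

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    """A reasoning component in a cognitive hierarchy (illustrative)."""
    name: str
    policy: Callable[[Dict], Dict]                  # beliefs -> commands
    belief: Dict = field(default_factory=dict)
    children: List["Node"] = field(default_factory=list)

    def sense(self, observation: Dict) -> None:
        # Belief update: fold new information into this node's state.
        self.belief.update(observation)

    def tick(self) -> None:
        # One reasoning cycle: derive commands from the current beliefs
        # and push them down to child nodes as their observations.
        command = self.policy(self.belief)
        for child in self.children:
            child.sense(command)
            child.tick()

# A planner node drives a motor node one level down.
motor = Node("motor", policy=lambda b: {"wheel_speed": b.get("speed", 0.0)})
planner = Node("planner",
               policy=lambda b: {"speed": 0.5 if b.get("goal_ahead") else 0.0},
               children=[motor])
planner.sense({"goal_ahead": True})
planner.tick()
print(motor.belief)  # -> {'speed': 0.5}
```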


Qualitative Planning with Quantitative Constraints for Online Learning of Robotic Behaviours

AAAI Conferences

This paper resolves previous problems in the Multi-Strategy architecture for online learning of robotic behaviours. The hybrid method includes a symbolic qualitative planner that constructs an approximate solution to a control problem. The approximate solution provides constraints for a numerical optimisation algorithm, which is used to refine the qualitative plan into an operational policy. Introducing quantitative constraints into the planner enables domain-independent reasoning that was previously unachievable. The method is demonstrated on a multi-tracked robot intended for urban search and rescue.
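The division of labour can be illustrated as follows: the qualitative plan contributes interval constraints, and a numerical optimiser searches within them. The bounds, cost function, and parameter names below are invented placeholders, not the paper's rescue-robot formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Suppose the qualitative planner has decided that, to climb a step, the
# flippers must pitch between 20 and 45 degrees while forward speed stays
# between 0.1 and 0.5 m/s. These intervals become optimiser bounds.
bounds = [(20.0, 45.0),   # flipper pitch (degrees)
          (0.1, 0.5)]     # forward speed (m/s)

def cost(params):
    """Placeholder cost; in practice this would run a simulation trial
    and score the robot's stability and progress."""
    pitch, speed = params
    return (pitch - 30.0) ** 2 + 10.0 * (speed - 0.3) ** 2

x0 = np.array([25.0, 0.2])    # start inside the qualitative region
result = minimize(cost, x0, bounds=bounds)
print(result.x)               # refined operational parameters
```

Constraining the search to the qualitatively feasible region is what keeps the numerical refinement tractable: the optimiser tunes a policy rather than discovering one from scratch.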


Tool Use Learning in Robots

AAAI Conferences

Learning to use an object as a tool requires understanding what goals it helps to achieve, the properties of the tool that make it useful, and how the tool must be manipulated to achieve the goal. We present a method that allows a robot to learn about objects in this way and thereby employ them as tools. An initial hypothesis for an action model of tool use is created by observing another agent accomplishing a task using a tool. The robot then refines its hypothesis by active learning, generating new experiments and observing the outcomes. Hypotheses are updated using Inductive Logic Programming. One of the novel aspects of this work is the method used to select experiments so that the search through the hypothesis space is minimised.
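The experiment-selection idea can be sketched as a discriminating-experiment loop. Everything below is an illustrative assumption, including the `predicts_success` interface and the hypothesis representation; the actual system learns relational action models with Inductive Logic Programming.

```python
def refine(hypotheses, candidate_experiments, run_experiment):
    """Shrink a hypothesis set by running the most informative experiments.

    `hypotheses` expose a (hypothetical) predicts_success(experiment) method;
    `run_experiment` performs the trial on the robot and reports success.
    """
    while len(hypotheses) > 1 and candidate_experiments:
        # Pick the experiment whose predicted outcomes split the remaining
        # hypotheses most evenly, so either result eliminates many of them.
        exp = max(candidate_experiments,
                  key=lambda e: min(
                      sum(h.predicts_success(e) for h in hypotheses),
                      sum(not h.predicts_success(e) for h in hypotheses)))
        candidate_experiments.remove(exp)
        succeeded = run_experiment(exp)
        hypotheses = [h for h in hypotheses
                      if h.predicts_success(exp) == succeeded]
    return hypotheses
```

Choosing the most discriminating experiment is what minimises the search: in the best case, each trial halves the space of surviving hypotheses.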



Controlling a Black-Box Simulation of a Spacecraft

AI Magazine

This article reports on experiments performed using a black-box simulation of a spacecraft. The goal of this research is to learn to control the attitude of an orbiting satellite. The spacecraft must be able to operate with minimal human supervision. To this end, we are investigating the possibility of using adaptive controllers for such tasks. Laboratory tests have suggested that rule-based methods can be more robust than systems developed using traditional control theory. The BOXES learning system, which has already met with success in simulated laboratory tasks, is an effective design framework for this new exercise.
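For background, BOXES (due to Michie and Chambers) partitions the continuous state space into discrete "boxes", each of which learns its own bang-bang control decision from survival statistics gathered over repeated runs. The sketch below is a simplified reconstruction under that description, not the code used in these experiments; the credit rule and all names are assumptions.

```python
import numpy as np

class Boxes:
    """Simplified BOXES-style controller over a discretised state space."""

    def __init__(self, bins_per_dim, lows, highs, n_actions=2):
        self.bins_per_dim, self.lows, self.highs = bins_per_dim, lows, highs
        n_boxes = int(np.prod(bins_per_dim))
        # Accumulated survival time and visit counts per (box, action).
        self.life = np.zeros((n_boxes, n_actions))
        self.uses = np.ones((n_boxes, n_actions))

    def box_index(self, state):
        """Map a continuous state vector to a single box index."""
        idx = 0
        for x, lo, hi, b in zip(state, self.lows, self.highs,
                                self.bins_per_dim):
            k = int((x - lo) / (hi - lo) * b)
            idx = idx * b + min(max(k, 0), b - 1)
        return idx

    def decide(self, box):
        """Choose the action with the best average survival in this box."""
        return int(np.argmax(self.life[box] / self.uses[box]))

    def learn(self, trajectory, failure_time):
        """After a failed run, credit every visited (box, action) pair with
        the time that remained before the failure occurred."""
        for t, (box, action) in enumerate(trajectory):
            self.life[box, action] += failure_time - t
            self.uses[box, action] += 1
```

Because each box adapts its decision independently from observed run lengths, the controller needs no analytical model of the plant, which is what makes the approach a plausible fit for a black-box simulation.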