A Proposal for a Unified Agent Behaviour Framework

AAAI Conferences

Games have used different mechanisms throughout their history to provide agent behavior: FSMs, utility systems, behavior trees, and planning methods. In this paper, we present an architecture that aims at incorporating all these approaches into trees of event-handling nodes with behaviours as leaves, using rules for combining actions akin to utility systems. This formalism aims to make it easier to develop hybrid systems.
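
The abstract gives no implementation details; the following is a minimal Python sketch, under assumed names (EventNode, Behaviour, utility, etc.), of how a tree of event-handling nodes with behaviours as leaves and utility-style rules for combining actions could be organized.

```python
# Minimal sketch (not the authors' implementation): a tree of event-handling
# nodes whose leaves are behaviours, combined via utility-style scoring rules.
# All names and the selection rule are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Behaviour:
    """Leaf node: an executable behaviour with a utility rule."""
    name: str
    utility: Callable[[Dict], float]   # scores the behaviour for a world state
    action: Callable[[Dict], None]     # side-effecting game action


@dataclass
class EventNode:
    """Inner node: dispatches an event to children and picks the best leaf."""
    handles: str                                # event type this node responds to
    children: List["EventNode"] = field(default_factory=list)
    behaviours: List[Behaviour] = field(default_factory=list)

    def dispatch(self, event: str, state: Dict) -> Optional[Behaviour]:
        if event != self.handles:
            return None
        # Collect candidate behaviours from this node and matching children.
        candidates = list(self.behaviours)
        for child in self.children:
            picked = child.dispatch(event, state)
            if picked is not None:
                candidates.append(picked)
        # Utility-system style combination: pick the highest-scoring behaviour.
        return max(candidates, key=lambda b: b.utility(state), default=None)


# Usage: a root node handling "enemy_seen" with two competing behaviours.
root = EventNode(
    handles="enemy_seen",
    behaviours=[
        Behaviour("flee", lambda s: 1.0 - s["health"], lambda s: print("fleeing")),
        Behaviour("attack", lambda s: s["health"], lambda s: print("attacking")),
    ],
)
state = {"health": 0.8}
choice = root.dispatch("enemy_seen", state)
if choice:
    choice.action(state)  # prints "attacking"
```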


Transfer Learning versus Multi-agent Learning regarding Distributed Decision-Making in Highway Traffic

arXiv.org Artificial Intelligence

Transportation and traffic are currently undergoing a rapid increase in both scale and complexity. At the same time, an increasing share of traffic participants are being transformed into agents driven or supported by artificial intelligence, resulting in mixed-intelligence traffic. This work explores the implications of distributed decision-making in mixed-intelligence traffic. The investigations are carried out on the basis of an online-simulated highway scenario, namely the MIT DeepTraffic simulation. In the first step, traffic agents are trained by means of a deep reinforcement learning approach deployed inside an elitist evolutionary algorithm for hyperparameter search. The resulting architectures and training parameters are then utilized either to train a single autonomous traffic agent and transfer the learned weights onto a multi-agent scenario, or to conduct multi-agent learning directly. Both learning strategies are evaluated on different ratios of mixed-intelligence traffic. The strategies are assessed according to the average speed of all agents driven by artificial intelligence. Traffic patterns that provoke a reduction in traffic flow are analyzed with respect to the different strategies.
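
As an illustration of the first strategy (transferring single-agent weights to a multi-agent scenario), here is a minimal PyTorch-style sketch. It is not the paper's actual setup: network shape, state and action sizes, and all names are assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): train a single Q-network
# agent, then copy its learned weights onto several agents for a multi-agent
# scenario. Network shape, state/action sizes, and names are illustrative.

import copy
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small fully connected Q-network over a grid-like traffic state."""
    def __init__(self, n_inputs: int = 135, n_actions: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def transfer_to_multi_agent(single_agent: QNetwork, n_agents: int) -> list:
    """Clone the single agent's learned weights onto n_agents policies."""
    return [copy.deepcopy(single_agent) for _ in range(n_agents)]


# Usage: pretend `single_agent` was trained in the single-agent setting,
# then deploy 10 copies that all act with the transferred weights.
single_agent = QNetwork()
fleet = transfer_to_multi_agent(single_agent, n_agents=10)
state = torch.zeros(1, 135)
actions = [policy(state).argmax(dim=1).item() for policy in fleet]
print(actions)
```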


A Coupled Operational Semantics for Goals and Commitments

Journal of Artificial Intelligence Research

Commitments capture how an agent relates to another agent, whereas goals describe states of the world that an agent is motivated to bring about. Commitments are elements of the social state of a set of agents, whereas goals are elements of the private states of individual agents. Intuitively, goals and commitments are complementary to each other. More importantly, an agent's goals and commitments ought to be coherent, in the sense that an agent's goals would lead it to adopt or modify relevant commitments, and an agent's commitments would lead it to adopt or modify relevant goals. However, despite the intuitive naturalness of the above connections, they have not been adequately studied in a formal framework. This article provides a combined operational semantics for goals and commitments by relating their respective life cycles as a basis for how these concepts (1) cohere for an individual agent and (2) engender cooperation among agents. Our semantics yields important desirable properties of convergence of the configurations of cooperating agents, thereby delineating some theoretically well-founded yet practical modes of cooperation in a multiagent system.
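
The article's formal semantics is not reproduced here; the sketch below merely illustrates the coupling idea with assumed life-cycle states and an assumed coherence rule (a detached commitment leads its debtor to adopt the corresponding goal). State names and the rule are assumptions, not the article's definitions.

```python
# Minimal sketch (illustrative, not the article's operational semantics):
# goal and commitment life cycles as simple state machines, coupled by a
# coherence rule. All names and transitions are assumptions.

from enum import Enum, auto
from typing import Dict, Optional


class GoalState(Enum):
    INACTIVE = auto()
    ACTIVE = auto()
    SATISFIED = auto()
    FAILED = auto()


class CommitmentState(Enum):
    CONDITIONAL = auto()
    DETACHED = auto()
    SATISFIED = auto()
    VIOLATED = auto()


class Agent:
    def __init__(self, name: str):
        self.name = name
        self.goals: Dict[str, GoalState] = {}
        self.commitments: Dict[str, CommitmentState] = {}

    def detach_commitment(self, cid: str, consequent_goal: str) -> None:
        """Assumed coherence rule: once a commitment is detached, the debtor
        adopts a goal to bring about its consequent."""
        self.commitments[cid] = CommitmentState.DETACHED
        self.goals[consequent_goal] = GoalState.ACTIVE

    def satisfy_goal(self, goal: str, cid: Optional[str] = None) -> None:
        """Achieving the goal also discharges the related commitment, if any."""
        self.goals[goal] = GoalState.SATISFIED
        if cid is not None:
            self.commitments[cid] = CommitmentState.SATISFIED


# Usage: a seller commits to deliver once paid; detaching the commitment
# makes "deliver" an active goal, and delivering satisfies both.
seller = Agent("seller")
seller.commitments["C_deliver"] = CommitmentState.CONDITIONAL
seller.detach_commitment("C_deliver", consequent_goal="deliver")
seller.satisfy_goal("deliver", cid="C_deliver")
print(seller.goals, seller.commitments)
```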


Multi-Strategy Learning of Robotic Behaviours via Qualitative Reasoning

AAAI Conferences

When given a task, an autonomous agent must plan a series of actions to perform in order to complete the goal. In robotics, planners face additional challenges, as the domain is typically large (even infinite), continuous, noisy, and non-deterministic. Typically, stochastic planning has been used to solve robotic control tasks. Such planners have been very successful in their various domains. The downside to such approaches is that the models and planners are highly specialised to a single control task; changing the control task requires developing an entirely new planner. The research in my thesis focuses on the problem of specialisation in continuous, noisy and non-deterministic robotic domains by developing a more generic planner. It builds on previous research in the area, specifically using the technique of Multi-Strategy Learning. Qualitative Modelling and Qualitative Reasoning are used to provide the generality, from which specific, quantitative controllers can be quickly learnt. The resulting system is applied to a real-world robotic platform for rough-terrain navigation.
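
To make the qualitative-to-quantitative idea concrete, here is a small illustrative Python sketch, entirely an assumption rather than the thesis's system: a qualitative rule fixes the direction of the control action, and a quantitative gain within that constraint is then learnt by trial in a toy one-dimensional simulation.

```python
# Minimal sketch (an assumption, not the thesis's system): a qualitative rule
# constrains the sign/direction of control, and a quantitative controller is
# then fitted within that constraint from trial data. Names are illustrative.

import random


def qualitative_rule(distance_trend: str, speed: float) -> str:
    """Qualitative model: only signs/trends, no magnitudes."""
    if distance_trend == "decreasing" and speed > 0:
        return "decelerate"          # direction of the action is fixed qualitatively
    return "maintain"


def learn_gain(trials: int = 50) -> float:
    """Quantitative refinement: search for a braking gain that avoids both
    collisions and unnecessarily harsh braking in a toy 1-D simulation."""
    best_gain, best_score = 0.0, float("-inf")
    for _ in range(trials):
        gain = random.uniform(0.1, 2.0)
        distance, speed, score = 20.0, 5.0, 0.0
        for _ in range(100):
            if qualitative_rule("decreasing", speed) == "decelerate":
                speed = max(0.0, speed - gain * 0.1)
            distance -= speed * 0.1
            if distance <= 0:        # collision: heavily penalised
                score -= 100.0
                break
            score -= gain * 0.01     # mild penalty for larger (harsher) gains
        if score > best_score:
            best_gain, best_score = gain, score
    return best_gain


print("learned braking gain:", round(learn_gain(), 2))
```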


Social Norms for Self-Policing Multi-agent Systems and Virtual Societies

AAAI Conferences

Social norms are one of the mechanisms by which decentralized societies achieve coordination amongst individuals. Such norms are conflict resolution strategies that develop from the population's interactions rather than from a centralized entity dictating agent protocol. One of the most important characteristics of social norms is that they are imposed by the members of the society, who are responsible for the fulfillment and defense of these norms. By allowing agents to manage (impose, abide by, and defend) social norms, societies achieve a higher degree of freedom, since no authority is needed to supervise all the interactions amongst agents. In this article we summarize the contributions of my dissertation, where we provide a unifying framework for the analysis of social norms in virtual societies, with a strong emphasis on virtual agents and humans.
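
As a toy illustration of self-policing (an assumption, not the dissertation's framework), the sketch below has agents abide by a norm with some probability and sanction observed violators, so that policing requires no central authority.

```python
# Minimal sketch (illustrative assumption): decentralized norm enforcement.
# Agents abide by a norm probabilistically; peers who observe a violation
# defend the norm by sanctioning the violator at a small personal cost.

import random
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    compliance: float      # probability of abiding by the norm
    payoff: float = 0.0

    def abides(self) -> bool:
        """True if the agent abides by the norm this round."""
        return random.random() < self.compliance

    def defend_norm(self, violator: "Agent") -> None:
        """Peer sanction: defending the norm costs a little, punishes more."""
        self.payoff -= 0.1
        violator.payoff -= 1.0


def run_round(society: list) -> None:
    for agent in society:
        if agent.abides():
            agent.payoff += 0.5            # benefit of coordinated behaviour
        else:
            # Every compliant observer may defend the norm against the violator.
            for observer in society:
                if observer is not agent and observer.abides():
                    observer.defend_norm(agent)


society = [Agent(f"a{i}", compliance=random.uniform(0.3, 0.9)) for i in range(10)]
for _ in range(100):
    run_round(society)
for agent in society:
    print(agent.name, round(agent.compliance, 2), round(agent.payoff, 1))
```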