Goto

Collaborating Authors

 Hawes, Nick


Learning Micro-Management Skills in RTS Games by Imitating Experts

AAAI Conferences

We investigate the problem of learning the control of small groups of units in combat situations in Real Time Strategy (RTS) games. AI systems may acquire such skills by observing and learning from expert players, or from other AI systems performing those tasks. However, access to training data may be limited, and representations based on metric information -- position, velocity, orientation etc. -- may be brittle, difficult for learning mechanisms to work with, and generalise poorly to new situations. In this work we apply qualitative spatial relations to compress such continuous, metric state-spaces into symbolic states, and show that this makes the learning problem easier and allows for more general models of behaviour. Models learnt from this representation are used to control situated agents, and to imitate the observed behaviour of both synthetic (pre-programmed) agents and human-controlled agents on a number of canonical micro-management tasks. We show how a Monte-Carlo method can be used to decompress qualitative data back into quantitative data for practical use in our control system. We present our work applied to the popular RTS game StarCraft.
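The abstract gives no implementation detail, so the following is only a minimal Python sketch of the general idea, not the authors' system: a metric agent/target configuration is compressed into a Star-calculus-style direction sector plus a coarse distance band, and rejection sampling serves as a simple Monte-Carlo way to recover a concrete metric point consistent with a symbolic state. The function names, thresholds and eight-sector granularity are all assumptions.

```python
import math
import random

def qualitative_state(agent_pos, target_pos, sectors=8, near_threshold=3.0):
    # Compress a metric agent/target configuration into a symbolic state:
    # a Star-calculus-style direction sector plus a coarse distance band.
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = int(angle / (2 * math.pi / sectors))
    band = "near" if math.hypot(dx, dy) < near_threshold else "far"
    return (sector, band)

def sample_metric_point(agent_pos, symbol, far_limit=10.0, trials=1000):
    # Monte-Carlo "decompression": draw random metric points around the agent
    # until one maps back to the requested qualitative state.
    for _ in range(trials):
        candidate = (agent_pos[0] + random.uniform(-far_limit, far_limit),
                     agent_pos[1] + random.uniform(-far_limit, far_limit))
        if qualitative_state(agent_pos, candidate) == symbol:
            return candidate
    return None  # no consistent point found within the sampling budget

if __name__ == "__main__":
    state = qualitative_state((0.0, 0.0), (2.0, 1.0))
    print("symbolic state:", state)
    print("sampled metric point:", sample_metric_point((0.0, 0.0), state))
```

A controller would then pass the sampled point to the game as a move or attack target; in practice the sampling region and granularity would be tuned to the game's coordinate system.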


Reports of the 2014 AAAI Spring Symposium Series

AI Magazine

The Association for the Advancement of Artificial Intelligence was pleased to present the AAAI 2014 Spring Symposium Series, held Monday through Wednesday, March 24–26, 2014. The titles of the eight symposia were Applied Computational Game Theory, Big Data Becomes Personal: Knowledge into Meaning, Formal Verification and Modeling in Human-Machine Systems, Implementing Selves with Safe Motivational Systems and Self-Improvement, The Intersection of Robust Intelligence and Trust in Autonomous Systems, Knowledge Representation and Reasoning in Robotics, Qualitative Representations for Robots, and Social Hacking and Cognitive Security on the Internet and New Media. This report contains summaries of the symposia, written, in most cases, by the cochairs of the symposium.


Preface

AAAI Conferences

This workshop collects interdisciplinary works in the integration of AI and robotics, with an emphasis toward the development of complete intelligent robots. The workshop will discuss questions like: (1) What are the methods and tools that can be transferred between the two fields? (2) What are the new research questions that must be addressed to enable this transfer? (3) What new application opportunities will be created? (4) What is the scientific profile needed to make progress in this combined field? (5) How can we foster the creation and consolidation of a truly integrated community?


Predicting Situated Behaviour Using Sequences of Abstract Spatial Relations

AAAI Conferences

The ability to understand behaviour is a crucial skill for Artificial Intelligence systems that are expected to interact with external agents such as humans or other AI systems. Such systems might be expected to operate in co-operative or team-based scenarios, such as domestic robots capable of helping out with household jobs, or disaster relief robots expected to collaborate and lend assistance to others. Conversely, they may also be required to hinder the activities of malicious agents in adversarial scenarios. In this paper we address the problem of modelling agent behaviour in domains expressed in continuous, quantitative space by applying qualitative, relational spatial abstraction techniques. We employ three common techniques for Qualitative Spatial Reasoning: the Region Connection Calculus, the Qualitative Trajectory Calculus and the Star calculus. We then supply an algorithm based on analysis of Mutual Information that allows us to find the set of abstract, spatial relationships that provide high degrees of information about an agent's future behaviour. We employ the RoboCup soccer simulator as a base for movement-based tasks of our own design and compare the predictions of our system against those of systems utilising solely metric representations. Results show that use of a spatial abstraction-based representation, along with feature selection mechanisms, allows us to outperform metric representations on the same tasks.
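As a rough illustration of the mutual-information-based selection described above (not the paper's exact algorithm), the sketch below scores toy qualitative-relation traces by how much empirical mutual information each carries about the agent's next action and keeps the top-scoring ones. The relation names and example data are hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # Empirical mutual information (in bits) between two discrete sequences.
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

def select_relations(traces, actions, top_k=2):
    # Rank candidate qualitative relations (keys of `traces`) by how much
    # information each carries about the agent's next action.
    scored = sorted(((mutual_information(values, actions), name)
                     for name, values in traces.items()), reverse=True)
    return [name for _, name in scored[:top_k]]

if __name__ == "__main__":
    # Toy data: each key is a qualitative relation observed over five time steps.
    traces = {
        "rcc_ball_agent": ["DC", "EC", "PO", "EC", "DC"],
        "qtc_agent_goal": ["-", "-", "+", "+", "-"],
        "star_sector":    [0, 1, 2, 1, 0],
    }
    actions = ["dribble", "dribble", "shoot", "shoot", "dribble"]
    print(select_relations(traces, actions))
```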


Evolutionary Learning of Goal Priorities in a Real-Time Strategy Game

AAAI Conferences

Autonomous AI systems should be aware of their own goals and be capable of independently formulating behaviour to address them. We would ideally like to provide an agent with a collection of competences that allow it to act in novel situations that may not be predictable at design-time. In particular, we are interested in the operation of AI systems in complex, oversubscribed domains where there may exist a variety of ways to address high-level goals by composing behaviours to achieve a set of sub-goals taken from a larger set. Our research focusses on how such sub-goals might be chosen ... However, due to the small numbers of goals present in existing systems, goal management is a relatively simple affair. Hanheide et al. (2010) describe a system similar in architecture to our own that manages just two goals, whereas the one discussed in this paper must manage upwards of forty. As the number of goals increases, the potential for goal conflict grows. This leads to a requirement for more sophisticated management processes, such as dynamic goal re-prioritisation, allowing agents to alter their behaviour to meet changing operational requirements. In the oversubscribed problem domains we are interested in, encoding all possible operating strategies at design time may ...
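The excerpt covers the motivation rather than the method, so the following is only a schematic sketch, under assumed goal labels and a stand-in fitness function, of what evolving a vector of goal priorities might look like; it is not the paper's encoding, fitness measure or evolutionary operators.

```python
import random

GOALS = ["expand", "defend", "attack", "scout"]  # illustrative goal labels, not from the paper

def simulate_fitness(priorities):
    # Stand-in for an evaluation run of the agent (e.g. a full RTS game) played
    # with the given goal priorities; here just distance to a fixed preference.
    target = {"expand": 0.4, "defend": 0.3, "attack": 0.2, "scout": 0.1}
    return -sum((priorities[g] - target[g]) ** 2 for g in GOALS)

def mutate(priorities, sigma=0.05):
    # Perturb each priority and renormalise so they remain a distribution.
    raw = {g: max(1e-6, priorities[g] + random.gauss(0.0, sigma)) for g in GOALS}
    total = sum(raw.values())
    return {g: v / total for g, v in raw.items()}

def evolve(generations=200, population=20):
    # (1 + lambda)-style loop: keep the best individual, refill by mutation.
    uniform = {g: 1.0 / len(GOALS) for g in GOALS}
    pop = [mutate(uniform, sigma=0.2) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=simulate_fitness, reverse=True)
        best = pop[0]
        pop = [best] + [mutate(best) for _ in range(population - 1)]
    return pop[0]

if __name__ == "__main__":
    print(evolve())
```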


Towards a Cognitive System that Can Recognize Spatial Regions Based on Context

AAAI Conferences

In order to collaborate with people in the real world, cognitive systems must be able to represent and reason about spatial regions in human environments. Consider the command "go to the front of the classroom". The spatial region mentioned (the front of the classroom) is not perceivable using geometry alone. Instead it is defined by its functional use, implied by nearby objects and their configuration. In this paper, we define such areas as context-dependent spatial regions and present a cognitive system able to learn them by combining qualitative spatial representations, semantic labels, and analogy. The system is capable of generating a collection of qualitative spatial representations describing the configuration of the entities it perceives in the world. It can then be taught context-dependent spatial regions using anchor points defined on these representations. From this we then demonstrate how an existing computational model of analogy can be used to detect context-dependent spatial regions in previously unseen rooms. To evaluate this process we compare detected regions to annotations made on maps of real rooms by human volunteers.
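As a loose, hypothetical illustration of this pipeline (and not the paper's actual representation), the sketch below derives coarse pairwise directional relations between semantically labelled objects and defines a "front of the classroom" region from two assumed anchor objects, using a padded bounding box as a crude stand-in for anchor points.

```python
from itertools import combinations

# Hypothetical semantically labelled objects in a room: name -> (x, y) position.
ROOM = {
    "whiteboard": (0.5, 0.0),
    "lectern":    (1.0, 1.0),
    "desk_row_1": (1.0, 3.0),
    "door":       (4.0, 5.0),
}

def direction(a, b):
    # Coarse qualitative direction from object a to object b (four-sector style).
    dx, dy = b[0] - a[0], b[1] - a[1]
    horiz = "east" if dx > 0 else "west"
    vert = "north" if dy > 0 else "south"
    return vert + "-" + horiz

def qualitative_description(objects):
    # Pairwise qualitative relations describing the room's configuration.
    return {(p, q): direction(objects[p], objects[q])
            for p, q in combinations(sorted(objects), 2)}

def region_from_anchors(objects, anchors, margin=0.5):
    # Define a context-dependent region as the bounding box of its anchor
    # objects, padded by a margin.
    xs = [objects[a][0] for a in anchors]
    ys = [objects[a][1] for a in anchors]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

if __name__ == "__main__":
    for pair, rel in qualitative_description(ROOM).items():
        print(pair, rel)
    print("front-of-classroom region:", region_from_anchors(ROOM, ["whiteboard", "lectern"]))
```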


Representing and Reasoning About Spatial Regions Defined by Context

AAAI Conferences

In order to collaborate with people in the real world, cognitive systems must be able to represent and reason about spatial regions in human environments. Consider the command "go to the front of the classroom". The spatial region mentioned (the front of the classroom) is not perceivable using geometry alone. Instead it is defined by its functional use, implied by nearby objects and their configuration. In this paper, we define such areas as context-dependent spatial regions and propose a method for a cognitive system to learn them incrementally by combining qualitative spatial representations, semantic labels, and analogy. Using data from a mobile robot, we generate a relational representation of semantically labeled objects and their configuration. Next, we show how the boundary of a context-dependent spatial region can be defined using anchor points. Finally, we demonstrate how an existing computational model of analogy can be used to transfer this region to a new situation.
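The analogy step could be pictured with a very crude stand-in: rather than the structure-mapping model the paper relies on, the sketch below simply re-anchors a taught region onto objects with the same semantic labels in an unseen room. Everything here (room contents, anchor labels, the bounding-box region) is an assumption for illustration only.

```python
# Hypothetical taught example: in the source room, the "front" region was
# anchored on the whiteboard and lectern. A structure-mapping engine would
# align relational structure; this stand-in aligns objects by label alone.
SOURCE_ANCHORS = ["whiteboard", "lectern"]

def transfer_region(target_room, source_anchors, margin=0.5):
    # Map the anchors of a taught region onto objects with the same semantic
    # label in a previously unseen room, then rebuild the region around them.
    matched = [label for label in source_anchors if label in target_room]
    if len(matched) < len(source_anchors):
        return None  # incomplete match; a real system would fall back to structural alignment
    xs = [target_room[m][0] for m in matched]
    ys = [target_room[m][1] for m in matched]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

if __name__ == "__main__":
    new_room = {"whiteboard": (2.0, 0.5), "lectern": (3.0, 1.5), "door": (0.0, 6.0)}
    print(transfer_region(new_room, SOURCE_ANCHORS))
```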