A Survey of Opponent Modeling in Adversarial Domains

Journal of Artificial Intelligence Research

Opponent modeling is the ability to use prior knowledge and observations in order to predict the behavior of an opponent. This survey presents a comprehensive overview of existing opponent modeling techniques for adversarial domains, many of which must address stochastic, continuous, or concurrent actions, and sparse, partially observable payoff structures. We discuss all the components of opponent modeling systems, including feature extraction, learning algorithms, and strategy abstractions. These discussions lead us to propose a new form of analysis for describing and predicting the evolution of game states over time. We then introduce a new framework that facilitates method comparison, analyze a representative selection of techniques using the proposed framework, and highlight common trends among recently proposed methods. Finally, we list several open problems and discuss future research directions inspired by AI research on opponent modeling and related research in other disciplines.
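To make the opening definition concrete, here is a minimal sketch of the simplest kind of opponent model: a frequency model that predicts the opponent's next action as the one observed most often in a given context. This is an illustrative baseline only, not a technique drawn from the survey; the class and method names are hypothetical.

```python
from collections import Counter

class FrequencyOpponentModel:
    """Minimal opponent model: predict the opponent's next action as the
    one they have played most often in a given context (game state)."""

    def __init__(self):
        self.history = {}  # context -> Counter of observed opponent actions

    def observe(self, context, action):
        # Record one observed opponent action in the given context.
        self.history.setdefault(context, Counter())[action] += 1

    def predict(self, context, default=None):
        counts = self.history.get(context)
        if not counts:
            return default  # no observations yet: fall back to a prior
        return counts.most_common(1)[0][0]

model = FrequencyOpponentModel()
for action in ["rock", "rock", "paper", "rock"]:
    model.observe("opening", action)
print(model.predict("opening"))  # -> rock
```

Real adversarial domains complicate every line of this sketch: stochastic or concurrent actions blur what counts as an observation, and partial observability means the "context" itself must be inferred.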

Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This

Artificial Intelligence

As \emph{artificial intelligence} (AI) systems are increasingly involved in decisions affecting our lives, ensuring that automated decision-making is fair and ethical has become a top priority. Intuitively, we feel that, akin to human decisions, judgments of artificial agents should necessarily be grounded in some moral principles. Yet a decision-maker (whether human or artificial) can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making. This raises two problems: (1) In settings where we rely on AI systems that use classifiers obtained with supervised learning, some induction/generalization is inevitable, and some relevant attributes may be unavailable even during learning. (2) Modeling such decisions as games reveals that any -- however ethical -- pure strategy is inevitably susceptible to exploitation. Moreover, in many games a Nash equilibrium can only be obtained by using mixed strategies, i.e., to achieve mathematically optimal outcomes, decisions must be randomized. In this paper, we argue that in supervised learning settings there exist random classifiers that perform at least as well as deterministic classifiers, and may hence be the optimal choice in many circumstances. We support our theoretical results with an empirical study indicating a positive societal attitude towards randomized artificial decision-makers, and we discuss some policy and implementation issues related to the use of random classifiers that are relevant for current AI policy and standardization initiatives.
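The game-theoretic point above -- that pure strategies are exploitable and equilibria may require randomization -- is the textbook situation in matching pennies. The sketch below is an illustrative simulation (not code from the paper, and the function names are hypothetical): any pure strategy loses every round against a best-responding opponent, while the 50/50 mixed strategy drives the expected payoff to the equilibrium value of zero.

```python
import random

# Matching pennies: the matcher wins +1 if the two choices agree, else -1.
def payoff(matcher, mismatcher):
    return 1 if matcher == mismatcher else -1

def exploit(pure_choice):
    # Any pure strategy for the matcher is fully exploitable: the
    # opponent simply plays the opposite side every round.
    best_response = "tails" if pure_choice == "heads" else "heads"
    return payoff(pure_choice, best_response)  # always -1

def mixed_value(rounds=100_000, seed=0):
    # The unique Nash equilibrium mixes heads/tails 50/50; against it the
    # opponent's choice no longer matters and the expected payoff is 0.
    rng = random.Random(seed)
    total = sum(payoff(rng.choice(["heads", "tails"]), "heads")
                for _ in range(rounds))
    return total / rounds

print(exploit("heads"))      # -1: a pure strategy loses every round
print(mixed_value())         # close to 0: randomization removes exploitability
```

The analogy the paper draws is that a deterministic classifier is a pure strategy, so a randomized classifier may be needed for the same reason a mixed strategy is.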

Explainable Goal-Driven Agents and Robots -- A Comprehensive Review

Artificial Intelligence

Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque and makes it difficult to trust them in safety-critical applications. The recent stance on the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.

An Ontological AI-and-Law Framework for the Autonomous Levels of AI Legal Reasoning

Artificial Intelligence

A framework is proposed that seeks to identify and establish a set of robust autonomous levels articulating the realm of Artificial Intelligence and Legal Reasoning (AILR). Doing so provides a sound and parsimonious basis for assessing progress in the application of AI to the law. It can be used by scholars in academic pursuits of AI legal reasoning, and by law practitioners and legal professionals in gauging how advances in AI are aiding the practice of law and the realization of aspirational versus achieved results. A set of seven levels of autonomy for AI and Legal Reasoning is meticulously proffered and mindfully discussed.

FLAIRS-32 Poster Abstracts

AAAI Conferences

The FLAIRS poster track is designed to promote discussion of emerging ideas and work, in order to encourage and help guide researchers — especially new researchers — who present a full poster in the conference poster session and receive the critical, work-shaping feedback that helps turn good work into great work. Abstracts of those posters appear here; we hope to see them fully developed into future FLAIRS papers.

Making the positive case for artificial intelligence - CBR


In part, the critics of AI are driven by the knowledge that 'white-collar jobs' are the ones now under threat. Business leaders are frequently confronted by notions of job-killing automation and by headlines on variations of the theme that "Robots Will Steal Our Jobs." Elon Musk, CEO of Tesla, Silicon Valley figurehead, and champion of technology-driven innovation, goes a step further by suggesting that AI is a fundamental threat to human civilisation. The robot on the assembly line is now a familiar image; AI in middle management is new.

Trust-Guided Behavior Adaptation Using Case-Based Reasoning

AAAI Conferences

The addition of a robot to a team can be difficult if the human teammates do not trust the robot. This can result in underutilization or disuse of the robot, even if the robot has skills or abilities that are necessary to achieve team goals or reduce risk. To help a robot integrate itself with a human team, we present an agent algorithm that allows a robot to estimate its trustworthiness and adapt its behavior accordingly. As behavior adaptation is performed, using case-based reasoning (CBR), information about the adaptation process is stored and used to improve the efficiency of future adaptations.
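The retrieve-and-reuse cycle described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the case structure, similarity measure, and field names (`situation`, `adaptation`, `trust_change`) are all hypothetical. It shows the core CBR idea of reusing the past adaptation whose situation most resembles the current one, restricted to adaptations that previously improved trust.

```python
# Toy case-based reasoning loop: each case records a past situation, the
# behavior adaptation that was tried, and the resulting change in trust.

def similarity(a, b):
    # Toy similarity over numeric feature dicts: 1 / (1 + L1 distance).
    keys = set(a) | set(b)
    dist = sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)
    return 1.0 / (1.0 + dist)

def best_adaptation(case_base, situation):
    # Retrieve: among past cases that improved trust, reuse the one whose
    # situation is most similar to the current situation.
    improving = [c for c in case_base if c["trust_change"] > 0]
    if not improving:
        return None  # no helpful precedent; the robot must explore
    return max(improving, key=lambda c: similarity(c["situation"], situation))

case_base = [
    {"situation": {"risk": 0.8, "autonomy": 0.9},
     "adaptation": "ask operator before acting", "trust_change": +0.3},
    {"situation": {"risk": 0.2, "autonomy": 0.9},
     "adaptation": "proceed autonomously", "trust_change": -0.1},
]

case = best_adaptation(case_base, {"risk": 0.7, "autonomy": 0.85})
print(case["adaptation"])  # -> ask operator before acting
```

After each adaptation is executed, a full system would also store the new (situation, adaptation, trust outcome) triple back into the case base, which is what lets future adaptations become more efficient.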

A Review of Real-Time Strategy Game AI

AI Magazine

This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision-making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academia and industry, finding the academic research heavily focused on creating game-winning agents, while the industry aims to maximise player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academia and industry. Finally, the areas of spatial reasoning, multi-scale AI, and cooperation are found to require future work, and standardised evaluation methods are proposed to produce comparable results between studies.

AAAI Conferences Calendar

AI Magazine

This page includes forthcoming AAAI sponsored conferences, conferences presented by AAAI Affiliates, and conferences held in cooperation with AAAI. AI Magazine also maintains a calendar listing that includes nonaffiliated conferences. Among the listed events: ICAART 2014 will be held January 10-12 in Lisbon, Portugal; ICCBR 2014 will be held September 29 - October 1; AIIDE-14 will be held October 3-7 in Raleigh, NC, USA; and FLAIRS-15 will be held May 18-20, 2015 in Hollywood, Florida, USA. The 10th ACM/IEEE International Conference on Human-Robot Interaction, the International Joint Conference on Artificial Intelligence, the AAAI Spring Symposium, and the AAAI Fall Symposium Series are also listed.

A Survey of Artificial Intelligence Research at the IIIA

AI Magazine

The IIIA was founded in 1991 and, since 1994, has been located on the campus of the Autonomous University of Barcelona. It grew out of an AI research group at the Center for Advanced Studies in Blanes (Spain) that began AI research in 1985. On average, IIIA has had about 50 members per year over the last 12 years, with a peak of almost 80 members in 2012. In total, around 200 different people, including visiting researchers as well as master's and Ph.D. students, have been members of IIIA over the past 20 years. Seventy-seven students have completed their Ph.D. work at our Institute, 48 of them during the last 12 years.