Collaborating Authors

 Floyd, Michael W.


Dungeon Crawl Stone Soup as an Evaluation Domain for Artificial Intelligence

arXiv.org Artificial Intelligence

Dungeon Crawl Stone Soup is a popular, single-player, free and open-source rogue-like video game with a sufficiently complex decision space to make it an ideal testbed for research in cognitive systems and, more generally, artificial intelligence. This paper describes the properties of Dungeon Crawl Stone Soup that make it well suited to evaluating new AI approaches. We also highlight an ongoing effort to build an API for AI researchers in the spirit of recent game APIs such as MALMO, ELF, and the Starcraft II API. Dungeon Crawl Stone Soup's complexity offers significant opportunities for evaluating AI and cognitive systems, including human user studies. In this paper we provide (1) a description of the state space of Dungeon Crawl Stone Soup, (2) a description of the components of our API, and (3) the potential benefits of evaluating AI agents in the Dungeon Crawl Stone Soup video game.
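The kind of agent-facing game API described above typically exposes an observe-act loop over a message interface. The following is a minimal sketch of such a loop; the message format, field names, and `StubConnection` class are illustrative assumptions, not the actual DCSS API.

```python
# Illustrative observe-act loop against a hypothetical game API.
# The JSON message schema and StubConnection are assumptions for the
# sketch, not part of the actual Dungeon Crawl Stone Soup API.
import json
import random

class StubConnection:
    """Stands in for a real game connection in this sketch."""
    def __init__(self, messages):
        self.messages = iter(messages)
        self.sent = []

    def recv(self):
        return next(self.messages)

    def send(self, payload):
        self.sent.append(payload)

def choose_action(state, legal_actions):
    """Placeholder policy; a real agent would reason over the full state."""
    return random.choice(legal_actions)

def run_episode(conn, max_turns=100):
    """Drive one episode: receive a state update, reply with an action."""
    for _ in range(max_turns):
        msg = json.loads(conn.recv())
        if msg.get("done"):
            return msg.get("score", 0)
        action = choose_action(msg["state"], msg["actions"])
        conn.send(json.dumps({"action": action}))
    return 0

# Two-message episode: one state update, then a terminal message.
conn = StubConnection([
    json.dumps({"state": {"hp": 10}, "actions": ["move_north"]}),
    json.dumps({"done": True, "score": 42}),
])
print(run_episode(conn))  # 42
```

Separating the policy (`choose_action`) from the transport loop keeps the same harness usable for different agents, which matters when the goal is comparative evaluation across AI approaches.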


Special Track on Case-Based Reasoning

AAAI Conferences

Case-based reasoning (CBR) is an artificial intelligence problem solving and learning methodology that retrieves and adapts previous experiences to fit newly encountered situations. This special track, currently in its 18th year, serves as an annual forum for researchers to present and discuss developments in CBR theory and application. Mirroring the annual International Conference on Case-Based Reasoning, this year’s special track has attracted a variety of high-quality submissions that present many valuable theoretical contributions and application domains. Although the CBR special track serves an important role as a focal point for the North American CBR community, this year continues the tradition of strong international participation. We would like to thank everyone who contributed to the success of this special track, especially the authors, the program committee members, the additional reviewers, and the FLAIRS conference organizers.


Human-Agent Teaming as a Common Problem for Goal Reasoning

AAAI Conferences

Human-agent teaming is a difficult yet relevant problem domain to which many goal reasoning systems are well suited, due to their ability to accept outside direction and (relatively) human-understandable internal state. We propose a formal model, and multiple variations on a multi-agent problem, to clarify and unify research in goal reasoning. We describe examples of these concepts, and propose standard evaluation methods for goal reasoning agents that act as a member of a team or on behalf of a supervisor.


Towards Deception Detection in a Language-Driven Game

AAAI Conferences

There are many real-world scenarios where agents must reliably detect deceit to make decisions. When deceitful statements are made, other statements or actions may make it possible to uncover the deceit. We describe a goal reasoning agent architecture that supports deceit detection by hypothesizing about agents’ actions, using new observations to revise past beliefs, and recognizing the plans and goals of other agents. In this paper, we focus on one module of our architecture, the Explanation Generator, and describe how it can generate hypotheses for a most probable truth scenario despite the presence of false information. We demonstrate its use in a multiplayer tabletop social deception game, One Night Ultimate Werewolf.


Dynamic Goal Recognition Using Windowed Action Sequences

AAAI Conferences

Recent advances in robotics and artificial intelligence have brought a variety of assistive robots designed to help humans accomplish their goals. However, many have limited autonomy and lack the ability to seamlessly integrate with human teams. One capability that can facilitate such human-robot teaming is the robot's ability to recognize its teammates' goals and react appropriately. This function permits the robot to actively assist the team and avoid performing redundant or counterproductive actions. In goal recognition, the basic problem domain consists of the following: a set E of environment fluents; a state S that is a value assignment to those fluents; and a set A of actions that describe potential transitions between states (with preconditions and effects defined over E, and parameterized over a set of environment objects O).
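The problem-domain components named in the abstract (E, S, A, O) can be sketched as plain data structures. The class and field names below are illustrative choices, not the paper's formalism; the applicability check is the standard precondition test from planning.

```python
# Illustrative encoding of a goal-recognition problem domain: fluents E,
# state S, actions A, objects O. Names are assumptions for this sketch,
# not taken from the paper.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    """A transition between states, parameterized over objects from O."""
    name: str
    parameters: tuple          # environment objects the action is applied to
    preconditions: frozenset   # (fluent, value) pairs required beforehand
    effects: frozenset         # (fluent, value) pairs holding afterwards

@dataclass
class GoalRecognitionDomain:
    fluents: set                               # E: environment fluents
    state: dict                                # S: value assignment to fluents
    actions: list                              # A: available actions
    objects: set = field(default_factory=set)  # O: environment objects

    def applicable(self, action):
        """An action may fire only when its preconditions hold in S."""
        return all(self.state.get(f) == v for f, v in action.preconditions)

# Example: a one-fluent world where opening a door is applicable.
domain = GoalRecognitionDomain(
    fluents={"at_door"},
    state={"at_door": True},
    actions=[],
)
open_door = Action(
    name="open_door",
    parameters=(),
    preconditions=frozenset({("at_door", True)}),
    effects=frozenset({("at_door", False)}),
)
print(domain.applicable(open_door))  # True
```

A goal recognizer observes sequences of such actions and infers which goal state the acting agent is pursuing; the windowing in the paper's title refers to reasoning over a bounded recent slice of that action sequence.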


Adapting Autonomous Behavior Based on an Estimate of an Operator's Trust

AAAI Conferences

Robots can be added to human teams to provide improved capabilities or to perform tasks that humans are unsuited for. However, to get the full benefit of the robots, the human teammates must use them in the appropriate situations. If the humans do not trust the robots, they may underutilize or disuse them, which could result in a failure to achieve team goals. We present a robot that is able to estimate its trustworthiness and adapt its behavior accordingly. This technique helps the robot remain trustworthy even when changes in context, task, or teammates are possible.


Special Track on Case-Based Reasoning

AAAI Conferences

Over the past 11 years, this FLAIRS special track program has provided a focal point for the North American case-based reasoning (CBR) community, though it has drawn good international participation as well. Five papers were accepted this year. Ontañón presents seven different case acquisition techniques for CBR systems that use learning from demonstration and performs a comparative evaluation in the context of real-time strategy games. Ontañón and Plaza describe a preliminary formal model of knowledge transfer in case-based inference based on the idea of partial unification. Jalali and Leake present a new approach for ordering questions in conversational CBR systems that takes into account not just their discriminativeness but also the user's ability to answer.