Collaborating Authors

Aha, David W.


Human-Centric Goal Reasoning with Ripple-Down Rules

arXiv.org Artificial Intelligence

ActorSim is a goal reasoning framework developed at the Naval Research Laboratory. Originally, all goal reasoning rules were hand-crafted. This work extends ActorSim with the capability of learning by demonstration: when a human trainer disagrees with a decision made by the system, the trainer can take over and show the system the correct decision. The learning component uses Ripple-Down Rules (RDR) to build new decision rules that correctly handle similar cases in the future. The system is demonstrated using the RoboCup Rescue Agent Simulation, which simulates a city-wide disaster requiring emergency services, including fire, ambulance, and police, to be dispatched to different sites to evacuate civilians from dangerous situations. The RDRs are implemented in a scripting language, FrameScript, which mediates between ActorSim and the agent simulator. Using Ripple-Down Rules, ActorSim can scale to an order of magnitude more goals than the previous version.
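
To make the correction mechanism concrete, below is a minimal single-classification Ripple-Down Rules sketch in Python. It illustrates the general RDR technique only, not ActorSim's FrameScript implementation; the dispatch scenario and all names are hypothetical.

class Rule:
    """One RDR node: a condition, a conclusion, and two branches."""
    def __init__(self, cond, conclusion, cornerstone=None):
        self.cond = cond                # predicate over a case (a dict)
        self.conclusion = conclusion    # decision returned when cond holds
        self.cornerstone = cornerstone  # the case that justified this rule
        self.if_true = None             # exception branch (refines a firing)
        self.if_false = None            # else branch (tried when cond fails)

    def evaluate(self, case):
        """Walk the tree; return (conclusion, node that produced it)."""
        if self.cond(case):
            if self.if_true:
                sub_conclusion, sub_node = self.if_true.evaluate(case)
                if sub_conclusion is not None:
                    return sub_conclusion, sub_node
            return self.conclusion, self
        if self.if_false:
            return self.if_false.evaluate(case)
        return None, self

    def patch(self, case, cond, conclusion):
        """Trainer disagreed with this node's answer: attach a correction."""
        new = Rule(cond, conclusion, cornerstone=case)
        if self.cond(case):             # wrong firing: add/extend an exception
            if self.if_true is None:
                self.if_true = new
                return
            tail = self.if_true
            while tail.if_false:
                tail = tail.if_false
            tail.if_false = new
        else:                           # missed case: extend the else branch
            self.if_false = new

# Hypothetical dispatch scenario: prefer fire brigades for burning sites,
# until the trainer shows that trapped civilians need an ambulance first.
root = Rule(lambda c: c.get("on_fire"), "send_fire_brigade")
case = {"on_fire": True, "civilians_trapped": True}
_, fired = root.evaluate(case)
fired.patch(case, lambda c: c.get("civilians_trapped"), "send_ambulance")
print(root.evaluate(case)[0])  # -> send_ambulance

Because corrections are attached at the exact node that misfired, with the triggering case stored as a cornerstone, earlier rules keep working on the cases they already handled, which is what lets the rule base grow safely as the trainer intervenes.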


Interpretable ML for Imbalanced Data

arXiv.org Artificial Intelligence

Deep learning models are being increasingly applied to imbalanced data in high-stakes fields such as medicine, autonomous driving, and intelligence analysis. Imbalanced data compounds the black-box nature of deep networks because the relationships between classes may be highly skewed and unclear. This can reduce trust by model users and hamper the progress of developers of imbalanced learning algorithms. Existing methods that investigate imbalanced data complexity are geared toward binary classification, shallow learning models and low dimensional data. In addition, current eXplainable Artificial Intelligence (XAI) techniques mainly focus on converting opaque deep learning models into simpler models (e.g., decision trees) or mapping predictions for specific instances to inputs, instead of examining global data properties and complexities. Therefore, there is a need for a framework that is tailored to modern deep networks, that incorporates large, high dimensional, multi-class datasets, and uncovers data complexities commonly found in imbalanced data (e.g., class overlap, sub-concepts, and outlier instances). We propose a set of techniques that can be used by both deep learning model users to identify, visualize and understand class prototypes, sub-concepts and outlier instances; and by imbalanced learning algorithm developers to detect features and class exemplars that are key to model performance. Our framework also identifies instances that reside on the border of class decision boundaries, which can carry highly discriminative information. Unlike many existing XAI techniques which map model decisions to gray-scale pixel locations, we use saliency through back-propagation to identify and aggregate image color bands across entire classes. Our framework is publicly available at https://github.com/dd1github/XAI_for_Imbalanced_Learning.
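
The class-level color-band idea can be sketched in a few lines. The following is a minimal illustration assuming a PyTorch image classifier with (N, 3, H, W) inputs; the function name and aggregation choices are assumptions for this sketch, not the released framework's API.

import torch

def class_color_saliency(model, loader, target_class, device="cpu"):
    """Aggregate |d score / d input| per RGB channel over a whole class."""
    model.eval()
    totals = torch.zeros(3)                 # one accumulator per color band
    count = 0
    for images, labels in loader:
        mask = labels == target_class       # keep only the class of interest
        if not mask.any():
            continue
        x = images[mask].to(device).requires_grad_(True)
        score = model(x)[:, target_class].sum()
        score.backward()                    # saliency via back-propagation
        # Collapse batch and spatial dimensions, keep the channel dimension:
        # a per-color-band saliency signature for the whole class.
        totals += x.grad.abs().sum(dim=(0, 2, 3)).cpu()
        count += int(mask.sum())
    return totals / max(count, 1)

The key difference from per-instance saliency maps is the aggregation: gradients are summed over every image of the class and over all pixel locations, so what remains is a global statement about which color bands the model relies on for that class.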


Dungeon Crawl Stone Soup as an Evaluation Domain for Artificial Intelligence

arXiv.org Artificial Intelligence

Dungeon Crawl Stone Soup is a popular, single-player, free and open-source rogue-like video game with a decision space complex enough to make it an ideal testbed for research in cognitive systems and, more generally, artificial intelligence. This paper describes the properties of Dungeon Crawl Stone Soup that are conducive to evaluating new approaches to AI systems. We also highlight an ongoing effort to build an API for AI researchers in the spirit of recent game APIs such as MALMO, ELF, and the StarCraft II API. Dungeon Crawl Stone Soup's complexity offers significant opportunities for evaluating AI and cognitive systems, including human user studies. In this paper we provide (1) a description of the state space of Dungeon Crawl Stone Soup, (2) a description of the components of our API, and (3) the potential benefits of evaluating AI agents in the Dungeon Crawl Stone Soup video game.
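
As a rough picture of how such an API would be used, here is a baseline agent loop against a hypothetical wrapper. The environment interface (reset, legal_actions, step, score) is an assumption for illustration in the style of other game APIs, not the actual DCSS API.

import random

class RandomCrawlAgent:
    """Picks a random legal action each turn; a floor for comparisons."""
    def __init__(self, rng):
        self.rng = rng

    def act(self, observation, legal_actions):
        return self.rng.choice(legal_actions)

def run_episode(env, agent, max_turns=1000):
    obs = env.reset()                    # start a new character and dungeon
    for _ in range(max_turns):
        action = agent.act(obs, env.legal_actions())
        obs, done = env.step(action)     # gym-style step, by assumption
        if done:                         # the run ends in death or victory
            break
    return env.score()                   # hypothetical evaluation hook

# Usage, given such an env: run_episode(env, RandomCrawlAgent(random.Random(0)))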


AI Rebel Agents

AI Magazine

The ability to say "no" in a variety of ways and contexts is an essential part of being socio-cognitively human. Through a variety of examples, we show that, despite ominous portrayals in science fiction, AI agents with human-inspired noncompliance abilities have many potential benefits. Rebel agents are intelligent agents that can oppose goals or plans assigned to them, or the general attitudes or behavior of other agents. They can serve purposes such as ethics, safety, and task-execution correctness, and can provide or support diverse points of view. We present a framework to help categorize and design rebel agents, discuss their social and ethical implications, and assess their potential benefits and the risks they may pose. Recognizing that, in human psychology, noncompliance has profound socio-cognitive implications, we also explore socio-cognitive dimensions of AI rebellion: social awareness and counternarrative intelligence. The latter term refers to an agent's ability to produce and use alternative narratives that support, express, or justify rebellion, either sincerely or deceptively. We encourage further conversation about AI rebellion within the AI community and beyond, given the inherent interdisciplinarity of the topic.
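
One rebellion pattern in the spirit of this framework is an agent that checks an assigned goal against internal constraints before accepting it. The toy Python sketch below is illustrative only; the constraints and goal fields are hypothetical.

def consider_goal(goal, constraints):
    """Accept or reject an assigned goal, explaining any rejection."""
    for name, violated in constraints:
        if violated(goal):
            return False, f"Refusing '{goal['task']}': would violate {name}."
    return True, f"Accepting '{goal['task']}'."

constraints = [
    ("the safety constraint", lambda g: g.get("risk", 0) > 0.8),
    ("the ethics constraint", lambda g: g.get("harms_civilians", False)),
]
print(consider_goal({"task": "clear building", "risk": 0.9}, constraints)[1])
# -> Refusing 'clear building': would violate the safety constraint.

Note that the rejection comes with a reason: even this toy agent produces the seed of a counternarrative rather than silently failing to comply.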


On Reproducible AI: Towards Reproducible Research, Open Science, and Digital Scholarship in AI Publications

AI Magazine

Background: Science is experiencing a reproducibility crisis. Artificial intelligence research is not an exception. Objective: To give practical and pragmatic recommendations for how to document AI research so that the results are reproducible. Method: Our analysis of the literature shows that AI publications fall short of providing enough documentation to facilitate reproducibility. Our suggested best practices are based on a framework for reproducibility and recommendations given for other disciplines. Results: We have made an author checklist based on our investigation and provided examples for how every item in the checklist can be documented. Conclusion: We encourage reviewers to use the suggested best practices and author checklist when reviewing submissions for AAAI publications and future AAAI conferences.
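
The checklist itself appears in the article. Purely as a loose illustration (the items below are drawn from common reproducibility practice, not copied from the authors' checklist), such a checklist can even be kept in machine-checkable form alongside a submission:

# Illustrative reproducibility checklist stub; items are generic examples.
CHECKLIST = {
    "method described unambiguously": False,
    "code publicly available": False,
    "training data available or documented": False,
    "hyperparameters reported": False,
    "random seeds reported": False,
    "hardware and software environment reported": False,
    "results include variance across runs": False,
}

def report_missing(checklist):
    """List undocumented items so authors can fix them before submission."""
    return [item for item, done in checklist.items() if not done]

print(report_missing(CHECKLIST))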


Goal Reasoning: Foundations, Emerging Applications, and Prospects

AI Magazine

Goal reasoning (GR) has a bright future as a foundation for the research and development of intelligent agents. GR is the study of agents that can deliberate on and self-select their goals/objectives, which is a desirable capability for some applications of deliberative autonomy. While studied in diverse AI sub-communities for multiple applications, our group has focused on how GR can play a key role for controlling autonomous systems. Thus, its importance is rapidly growing and it merits increased attention, particularly from the perspective of research on AI safety. In this article, I introduce GR, briefly relate it to other AI topics, summarize some of our group’s work on GR foundations and emerging applications, and describe some current and future research directions.
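
One widely cited GR model associated with this line of work is goal-driven autonomy, a cycle of discrepancy detection, explanation, goal formulation, and goal management. The Python sketch below is schematic, with trivial stand-in hooks; it is not any system's actual implementation.

def gda_step(state, expectation, goals, hooks):
    """One pass of the cycle; returns the (possibly revised) goal list."""
    discrepancies = hooks["detect"](state, expectation)
    if discrepancies:
        explanation = hooks["explain"](state, discrepancies)
        new_goal = hooks["formulate"](state, explanation)
        goals = hooks["manage"](goals, new_goal)
    return goals

# Stand-in hooks: a scouting goal is formulated when an unexpected contact
# appears; goal management naively lets the new goal preempt the old ones.
hooks = {
    "detect": lambda s, e: [c for c in s["contacts"] if c not in e],
    "explain": lambda s, d: {"cause": "unknown contacts", "contacts": d},
    "formulate": lambda s, ex: ("investigate", ex["contacts"][0]),
    "manage": lambda goals, g: [g] + goals,
}
goals = gda_step({"contacts": ["x1"]}, [], [("patrol", "area-a")], hooks)
print(goals)  # -> [('investigate', 'x1'), ('patrol', 'area-a')]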


The 25th International Conference on Case-Based Reasoning

AI Magazine

Usually, a CBR process is composed of four steps, namely: retrieve (selection of one or several case(s) from the base); reuse (adaptation of the retrieved case(s) to solve the new problem); revise (presentation of the newly formed case to application domain experts and, as appropriate, corrections to it); and retain (addition of the revised case to the case base, if this addition is judged useful). CBR is an active field of research that is application- and theory-driven, and it relates to both machine learning and knowledge representation. ICCBR is an important venue for presenting CBR-related research. Generous funding from NTNU, the Norwegian Research Council, and our other sponsors allowed the conference to cover all the meals for the attendees during the conference. Each day of the conference began with an invited talk; on the first day, Henri Prade presented an introduction to analogical proportions and analogical reasoning.
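
The four-step cycle described above maps directly onto a schematic loop; the similarity, adaptation, and review functions in this Python sketch are placeholders, and the toy usage is illustrative only.

def cbr_solve(problem, case_base, similarity, adapt, review):
    # Retrieve: select the most similar stored case.
    retrieved = max(case_base, key=lambda c: similarity(problem, c["problem"]))
    # Reuse: adapt the retrieved solution to the new problem.
    solution = adapt(problem, retrieved)
    # Revise: let domain experts correct the proposed solution.
    revised, useful = review(problem, solution)
    # Retain: keep the revised case only if it is judged useful.
    if useful:
        case_base.append({"problem": problem, "solution": revised})
    return revised

# Toy usage: numeric problems, nearest-neighbour retrieval, offset adaptation.
base = [{"problem": 2, "solution": 4}, {"problem": 5, "solution": 10}]
sim = lambda a, b: -abs(a - b)
adapt = lambda p, c: c["solution"] + 2 * (p - c["problem"])
review = lambda p, s: (s, True)   # the expert accepts as-is
print(cbr_solve(3, base, sim, adapt, review))  # -> 6, and the case is retained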


Towards Explainable NPCs: A Relational Exploration Learning Agent

AAAI Conferences

Non-player characters (NPCs) in video games are a common form of frustration for players because they generally provide no explanations for their actions or provide simplistic explanations using fixed scripts. Motivated by this, we consider a new design for agents that can learn about their environments, accomplish a range of goals, and explain what they are doing to a supervisor. We propose a framework for studying this type of agent, and compare it to existing reinforcement learning and self-motivated agent frameworks. We propose a novel design for an initial agent that acts within this framework. Finally, we describe an evaluation centered around the supervisor's satisfaction and understanding of the agent's behavior.
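
A minimal version of the explainability idea is an NPC that records, for each action, the goal it served, so a supervisor can later ask why it acted. The class below is a stand-in sketch, not the proposed agent design; the policy and goal machinery are hypothetical.

class ExplainableNPC:
    def __init__(self, policy):
        self.policy = policy     # maps (state, goal) -> action
        self.trace = []          # (state, goal, action) triples

    def act(self, state, goal):
        action = self.policy(state, goal)
        self.trace.append((state, goal, action))
        return action

    def explain_last(self):
        if not self.trace:
            return "I have not acted yet."
        state, goal, action = self.trace[-1]
        return f"I chose '{action}' because I was pursuing '{goal}' in {state}."

npc = ExplainableNPC(lambda s, g: "move_north" if g == "reach gate" else "wait")
npc.act("courtyard", "reach gate")
print(npc.explain_last())
# -> I chose 'move_north' because I was pursuing 'reach gate' in courtyard.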


Human-Agent Teaming as a Common Problem for Goal Reasoning

AAAI Conferences

Human-agent teaming is a difficult yet relevant problem domain to which many goal reasoning systems are well suited, due to their ability to accept outside direction and (relatively) human-understandable internal state. We propose a formal model, and multiple variations on a multi-agent problem, to clarify and unify research in goal reasoning. We describe examples of these concepts, and propose standard evaluation methods for goal reasoning agents that act as a member of a team or on behalf of a supervisor.
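
In the spirit of the proposed evaluation methods, one could score a goal reasoning agent by how often its supervisor must intervene. The harness below is a toy sketch; all interfaces are hypothetical stand-ins for the paper's formal model.

def evaluate_team(agent, supervisor, env, episodes=100):
    interventions = 0
    successes = 0
    for _ in range(episodes):
        state = env.reset()
        goal = supervisor.assign_goal(state)
        while not env.done(state):
            if agent.wants_new_goal(state, goal):    # agent self-selects
                goal = agent.choose_goal(state)
            if supervisor.disapproves(state, goal):  # supervisor overrides
                goal = supervisor.assign_goal(state)
                interventions += 1
            state = env.step(state, agent.act(state, goal))
        successes += env.goal_achieved(state, goal)
    return successes / episodes, interventions / episodes

Reporting both numbers matters: an agent that never self-selects goals needs no overrides but also contributes nothing, while one that self-selects well should raise success without raising intervention cost.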


Comparing Reward Shaping, Visual Hints, and Curriculum Learning

AAAI Conferences

Common approaches to learning complex tasks in reinforcement learning include reward shaping, environmental hints, or a curriculum. Yet few studies examine how these approaches compare to each other, when one might be preferred, or how they may complement each other. As a first step in this direction, we compare reward shaping, hints, and curricula for a deep RL agent in the game of Minecraft. We seek to answer whether reward shaping, visual hints, or curricula have the most impact on performance, which we measure as the time to reach the target, the distance from the target, the cumulative reward, or the number of actions taken. Our analyses show that performance is most impacted by the curriculum used and by visual hints; shaping had less impact. For similar navigation tasks, the results suggest that designing an effective curriculum and providing appropriate hints improve performance the most.
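
For readers unfamiliar with the ingredients being compared, here is a hedged Python sketch of two of them for a navigation task; the potential function, formulas, and threshold are generic illustrations, not the paper's exact setup.

def shaped_reward(env_reward, dist_before, dist_after, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s),
    with phi(s) = -distance(s, target) for a navigation task."""
    return env_reward + gamma * (-dist_after) - (-dist_before)

def next_stage(stage, recent_success_rate, threshold=0.8):
    """Curriculum step: advance to a harder task once the agent is reliable."""
    return stage + 1 if recent_success_rate >= threshold else stage

print(shaped_reward(0.0, 5.0, 4.0))  # -> 1.04: a bonus for moving closer

The potential-based form is the standard way to shape rewards without changing which policies are optimal, which makes it a fair baseline to compare against hints and curricula.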