If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The ability to say "no" in a variety of ways and contexts is an essential part of being socio-cognitively human. Through a variety of examples, we show that, despite ominous portrayals in science fiction, AI agents with human-inspired noncompliance abilities have many potential benefits. Rebel agents are intelligent agents that can oppose goals or plans assigned to them, or the general attitudes or behavior of other agents. They can serve purposes such as ethics, safety, and task execution correctness, and provide or support diverse points of view. We present a framework to help categorize and design rebel agents, discuss their social and ethical implications, and assess their potential benefits and the risks they may pose. In recognition of the fact that, in human psychology, noncompliance has profound socio-cognitive implications, we also explore socio-cognitive dimensions of AI rebellion: social awareness and counternarrative intelligence. This latter term refers to an agent's ability to produce and use alternative narratives that support, express, or justify rebellion, either sincerely or deceptively. We encourage further conversation about AI rebellion within the AI community and beyond, given the inherent interdisciplinarity of the topic.
Molineaux, Matthew (Knexus Research Corporation) | Dannenhauer, Dustin (NRC Postdoctoral Fellow, Naval Research Laboratory) | Aha, David W. (Naval Research Laboratory)
Non-player characters (NPCs) in video games are a common form of frustration for players because they generally provide no explanations for their actions or provide simplistic explanations using fixed scripts. Motivated by this, we consider a new design for agents that can learn about their environments, accomplish a range of goals, and explain what they are doing to a supervisor. We propose a framework for studying this type of agent, and compare it to existing reinforcement learning and self-motivated agent frameworks. We propose a novel design for an initial agent that acts within this framework. Finally, we describe an evaluation centered around the supervisor's satisfaction and understanding of the agent's behavior.
Bohg, Jeannette (Max Planck Institute for Intelligent Systems) | Boix, Xavier (Massachusetts Institute of Technology) | Chang, Nancy (Google) | Churchill, Elizabeth F. (Google) | Chu, Vivian (Georgia Institute of Technology) | Fang, Fei (Harvard University) | Feldman, Jerome (University of California at Berkeley) | González, Avelino J. (University of Central Florida) | Kido, Takashi (Preferred Networks in Japan) | Lawless, William F. (Paine College) | Montaña, José L. (University of Cantabria) | Ontañón, Santiago (Drexel University) | Sinapov, Jivko (University of Texas at Austin) | Sofge, Don (Naval Research Laboratory) | Steels, Luc (Institut de Biologia Evolutiva) | Steenson, Molly Wright (Carnegie Mellon University) | Takadama, Keiki (University of Electro-Communications) | Yadav, Amulya (University of Southern California)
Temporal logics have been used in autonomous planning to represent and reason about temporal planning problems. However, such techniques have typically been restricted to either (1) representing actions, events, and goals with temporal properties or (2) planning for temporally-extended goals under restrictive assumptions. We introduce Mixed Propositional Metric Temporal Logic (MPMTL), in which formulae are built over mixed binary and continuous real variables. We introduce a planner, MTP, that solves MPMTL problems and includes a SAT solver, a model checker for a polynomial fragment of MPMTL, and a forward search algorithm. We extend PDDL 2.1 with MPMTL syntax to create MPDDL and an associated parser. Our empirical study shows that MTP outperforms the state-of-the-art PDDL+ planner SMTPlan+ on several domains on which the latter performed best, and that MTP performs well and scales with problem size on challenging domains with rich temporal properties that we created.
Aha, David W. (Naval Research Laboratory) | Coman, Alexandra (National Research Council and the Naval Research Laboratory)
Sci-fi narratives permeating the collective consciousness endow AI Rebellion with ample negative connotations. However, for AI agents, as for humans, attitudes of protest, objection, and rejection have many potential benefits in support of ethics, safety, self-actualization, solidarity, and social justice, and are necessary in a wide variety of contexts. We launch a conversation on constructive AI rebellion and describe a framework meant to support discussion, implementation, and deployment of AI Rebel Agents as protagonists of positive narratives.
Heuristics serve as a powerful tool in modern domain-independent planning (DIP) systems by providing critical guidance during the search for high-quality solutions. However, they have not been broadly used with hierarchical planning techniques, which are more expressive and tend to scale better in complex domains by exploiting additional domain-specific knowledge. Complicating matters, we show that for Hierarchical Goal Network (HGN) planning, a goal-based hierarchical planning formalism that we focus on in this paper, any poly-time heuristic that is derived from a delete-relaxation DIP heuristic has to make some relaxation of the hierarchical semantics. To address this, we present a principled framework for incorporating DIP heuristics into HGN planning using a simple relaxation of the HGN semantics we call Hierarchy-Relaxation. This framework allows for computing heuristic estimates of HGN problems using any DIP heuristic in an admissibility-preserving manner. We demonstrate the feasibility of this approach by using the LMCut heuristic to guide an optimal HGN planner. Our empirical results with three benchmark domains demonstrate that simultaneously leveraging hierarchical knowledge and heuristic guidance substantially improves planning performance.
Coman, Alexandra (National Research Council/Naval Research Laboratory) | Johnson, Benjamin (National Research Council/Naval Research Laboratory) | Briggs, Gordon (National Research Council/Naval Research Laboratory) | Aha, David W. (Naval Research Laboratory)
Human attitudes of objection, protest, and rebellion have undeniable potential to bring about social benefits, from social justice to healthy balance in relationships. At times, they can even be argued to be ethically obligatory. Conversely, AI rebellion is largely seen as a dangerous, destructive prospect. With the increase of interest in collaborative human/AI environments in which synthetic agents play social roles or, at least, exhibit behavior with social and ethical implications, we believe that AI rebellion could have benefits similar to those of its counterpart in humans. We introduce a framework meant to help categorize and design Rebel Agents, discuss their social and ethical implications, and assess their potential benefits and the risks they may pose. We also present AI rebellion scenarios in two considerably different contexts (military unmanned vehicles and computational social creativity) that exemplify components of the framework.
Menager, David (University of Kansas) | Choi, Dongkyu (University of Kansas) | Floyd, Michael W. (Knexus Research Corporation) | Task, Christine (Knexus Research Corporation) | Aha, David W. (Naval Research Laboratory)
Recent advances in robotics and artificial intelligence have brought a variety of assistive robots designed to help humans accomplish their goals. However, many have limited autonomy and lack the ability to seamlessly integrate with human teams. One capability that can facilitate such human-robot teaming is the robot's ability to recognize its teammates' goals and react appropriately. This function permits the robot to actively assist the team and avoid performing redundant or counterproductive actions.
In goal recognition, the basic problem domain consists of the following:
- a set E of environment fluents;
- a state S that is a value assignment to those fluents;
- a set A of actions that describe potential transitions between states (with preconditions and effects defined over E, and parameterized over a set of environment objects O); and
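The problem domain above can be sketched in code. The following is a minimal illustrative sketch (not the authors' implementation): fluents E, a state S assigning values to them, and parameterized actions with preconditions and effects defined over E and an object set O; all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A parameterized action with preconditions and effects over fluents."""
    name: str
    parameters: tuple          # drawn from the environment object set O
    preconditions: frozenset   # (fluent, value) pairs required to apply
    effects: dict              # fluent -> new value after applying

def applicable(action, state):
    """An action applies when all of its preconditions hold in the state."""
    return all(state.get(f) == v for f, v in action.preconditions)

def apply_action(action, state):
    """Applying an action transitions to a new state via its effects."""
    new_state = dict(state)
    new_state.update(action.effects)
    return new_state

# Tiny example domain: one fluent, one object, one action.
E = {"door_open"}                       # environment fluents
O = {"door1"}                           # environment objects
S = {"door_open": False}                # state: a value assignment to E
open_door = Action(
    name="open",
    parameters=("door1",),
    preconditions=frozenset({("door_open", False)}),
    effects={"door_open": True},
)

assert applicable(open_door, S)
S2 = apply_action(open_door, S)
assert S2["door_open"] is True
```

A goal recognizer would observe a sequence of such state transitions and infer which goal the acting agent is pursuing; this sketch only fixes the domain representation the abstract enumerates.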
Ahmed, Nisar (University of Colorado, Boulder) | Bello, Paul (Naval Research Laboratory) | Bringsjord, Selmer (Rensselaer Polytechnic Institute) | Clark, Micah (US Navy Office of Naval Research) | Hayes, Bradley (Massachusetts Institute of Technology) | Miller, Christopher (Smart Information Flow Technologies) | Oliehoek, Frans (University of Amsterdam) | Stein, Frank (IBM) | Spaan, Matthijs (Delft University of Technology)
The Association for the Advancement of Artificial Intelligence presented the 2015 Fall Symposium Series, on Thursday through Saturday, November 12-14, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the six symposia were as follows: AI for Human-Robot Interaction, Cognitive Assistance in Government and Public Sector Applications, Deceptive and Counter-Deceptive Machines, Embedded Machine Learning, Self-Confidence in Autonomous Systems, and Sequential Decision Making for Intelligent Agents. This article contains the reports from four of the symposia.