Aha, David
Towards Deception Detection in a Language-Driven Game
Hancock, Will (Georgia Institute of Technology) | Floyd, Michael W. (Knexus Research) | Molineaux, Matthew (Knexus Research) | Aha, David (Naval Research Laboratory)
There are many real-world scenarios where agents must reliably detect deceit to make decisions. When deceitful statements are made, other statements or actions may make it possible to uncover the deceit. We describe a goal reasoning agent architecture that supports deceit detection by hypothesizing about an agent's actions, using new observations to revise past beliefs, and recognizing the plans and goals of other agents. In this paper, we focus on one module of our architecture, the Explanation Generator, and describe how it can generate hypotheses for the most probable truth scenario despite the presence of false information. We demonstrate its use in a multiplayer tabletop social deception game, One Night Ultimate Werewolf.
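The abstract does not detail the Explanation Generator's algorithm; as a minimal sketch of the underlying idea, hypothesis generation can be framed as enumerating the role assignments ("worlds") that remain consistent with observed statements when deceitful roles are allowed to lie. All names, roles, and the truthfulness rule below are illustrative, not taken from the paper.

```python
from itertools import permutations

def consistent_worlds(players, roles, statements):
    """Enumerate role assignments consistent with the observed statements.
    Assumption (illustrative): non-werewolves speak truthfully about their
    own role, while a werewolf may claim anything."""
    worlds = []
    for assignment in permutations(roles):
        world = dict(zip(players, assignment))
        ok = True
        for speaker, claimed_role in statements:
            if world[speaker] != "werewolf" and world[speaker] != claimed_role:
                ok = False   # a truthful role made a false claim: impossible world
                break
        if ok:
            worlds.append(world)
    return worlds

# Two players both claim to be the seer, so at least one is lying.
worlds = consistent_worlds(
    ["ann", "bob", "eve"],
    ("villager", "seer", "werewolf"),
    [("ann", "villager"), ("bob", "seer"), ("eve", "seer")],
)
```

With these statements only two worlds survive: either bob or eve is the lying werewolf, which is exactly the kind of hypothesis set a most-probable-truth scenario would be scored over.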
Dimensionality Reduced Reinforcement Learning for Assistive Robots
Curran, William (Oregon State University) | Brys, Tim (Vrije Universiteit Brussel) | Aha, David (Navy Center for Applied Research in AI) | Taylor, Matthew (Washington State University) | Smart, William D. (Oregon State University)
State-of-the-art personal robots need to perform complex manipulation tasks to be viable in assistive scenarios. However, many of these robots, like the PR2, use manipulators with high degrees-of-freedom, and the problem is made worse in bimanual manipulation tasks. The complexity of these robots leads to high-dimensional state spaces, which are difficult to learn in. We reduce the state space by using demonstrations to discover a representative low-dimensional hyperplane in which to learn. This allows the agent to converge quickly to a good policy. We call this Dimensionality Reduced Reinforcement Learning (DRRL). However, when performing dimensionality reduction, not all dimensions can be fully represented. We extend this work by first learning in a single dimension, and then transferring that knowledge to a higher-dimensional hyperplane. By using our Iterative DRRL (IDRRL) framework with an existing learning algorithm, the agent converges quickly to a better policy by iterating to increasingly higher dimensions. IDRRL is robust to demonstration quality and can learn efficiently using few demonstrations. We show that adding IDRRL to the Q-Learning algorithm leads to faster learning on a set of mountain car tasks and the robot swimmers problem.
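The abstract does not specify how the representative hyperplane is found; a common way to realize this step, sketched below under that assumption, is PCA on the demonstration states (centered SVD), after which tabular learning proceeds over the projected, discretized coordinates instead of the raw state. Function names are hypothetical.

```python
import numpy as np

def demo_hyperplane(demo_states, k):
    """Fit a k-dimensional hyperplane to demonstration states via PCA:
    center the data and take the top-k right singular vectors."""
    mu = demo_states.mean(axis=0)
    _, _, vt = np.linalg.svd(demo_states - mu, full_matrices=False)
    return mu, vt[:k]              # each row of vt[:k] is a principal axis

def project(state, mu, axes):
    """Map a full-dimensional state onto the learned hyperplane."""
    return (state - mu) @ axes.T

# Toy demonstrations: 3-D states that mostly vary along one direction,
# so a 1-D hyperplane captures nearly all the structure.
rng = np.random.default_rng(0)
demos = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0, 0.5]])
demos += 0.01 * rng.normal(size=(200, 3))
mu, axes = demo_hyperplane(demos, k=1)

# A Q-learner would now index its table by the (discretized) projection
# project(s, mu, axes) rather than the raw 3-D state s.
z = project(np.array([1.0, 2.0, 0.5]), mu, axes)
```

Iterating k from 1 upward, carrying the learned values forward at each step, mirrors the IDRRL idea of transferring knowledge to increasingly higher-dimensional hyperplanes.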
Case-Based Behavior Adaptation Using an Inverse Trust Metric
Floyd, Michael (Knexus Research) | Drinkwater, Michael (Knexus Research) | Aha, David (Naval Research Laboratory)
Robots are added to human teams to increase the team's skills or capabilities, but to get the full benefit, the teams must trust the robots. We present an approach that allows a robot to estimate its trustworthiness and adapt its behavior accordingly. Additionally, the robot uses case-based reasoning to store previous behavior adaptations and uses this information to perform future adaptations. In a simulated robotics domain, we compare case-based behavior adaptation to behavior adaptation that does not learn and show that it significantly reduces the number of behaviors that need to be evaluated before a trustworthy behavior is found.
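The core CBR mechanism described above can be sketched minimally: retain (situation, adaptation) pairs, and retrieve the adaptation from the most similar stored situation. The feature encoding, distance metric, and adaptation labels below are illustrative placeholders, not the paper's actual representation.

```python
import math

class AdaptationCaseBase:
    """Minimal CBR store: each case pairs a situation feature vector with
    the behavior adaptation that previously restored trust."""

    def __init__(self):
        self.cases = []    # list of (features, adaptation) pairs

    def retain(self, features, adaptation):
        """Store a successful adaptation for later reuse."""
        self.cases.append((features, adaptation))

    def retrieve(self, features):
        """Return the adaptation whose stored situation is the Euclidean
        nearest neighbor of `features`, or None if the base is empty."""
        if not self.cases:
            return None
        return min(self.cases, key=lambda c: math.dist(c[0], features))[1]

cb = AdaptationCaseBase()
cb.retain([0.9, 0.1], "slow_down")     # e.g. inverse-trust estimate + task context
cb.retain([0.2, 0.8], "ask_operator")
best = cb.retrieve([0.85, 0.15])       # nearest stored case suggests "slow_down"
```

Reusing a retrieved adaptation as the starting point, rather than searching behaviors from scratch, is what reduces the number of behaviors evaluated before a trustworthy one is found.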
Planning in Dynamic Environments: Extending HTNs with Nonlinear Continuous Effects
Molineaux, Matthew (Knexus Research Corporation) | Klenk, Matthew (Naval Research Laboratory) | Aha, David (Naval Research Laboratory)
Planning in dynamic continuous environments requires reasoning about nonlinear continuous effects, which previous Hierarchical Task Network (HTN) planners do not support. In this paper, we extend an existing HTN planner with a new state projection algorithm. To our knowledge, this is the first HTN planner that can reason about nonlinear continuous effects. We use a wait action to instruct this planner to consider continuous effects in a given state. We also introduce a new planning domain to demonstrate the benefits of planning with nonlinear continuous effects. We compare our approach with a linear continuous effects planner and a discrete effects HTN planner on a benchmark domain, which reveals that its additional costs are largely mitigated by domain knowledge. Finally, we present an initial application of this algorithm in a practical domain, a Navy training simulation, illustrating the utility of this approach for planning in dynamic continuous environments.
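The paper's state projection algorithm is not given in the abstract; the sketch below illustrates the general idea of projecting nonlinear continuous effects over a wait action using simple Euler integration. The dynamics, variable names, and step size are all illustrative assumptions.

```python
def project_state(state, dynamics, duration, dt=0.1):
    """Forward-project continuous state variables under nonlinear dynamics
    for the duration of a `wait` action, via fixed-step Euler integration.
    `state` maps variable names to values; `dynamics` maps each variable
    name to a derivative function of the whole state."""
    t = 0.0
    while t < duration:
        # Build the new state from the old one so all derivatives are
        # evaluated at the same time point (standard explicit Euler).
        state = {k: v + dt * dynamics[k](state) for k, v in state.items()}
        t += dt
    return state

# Example: a vessel coasting with drag proportional to v^2 -- a nonlinear
# effect a linear-continuous-effects planner could not represent exactly.
dyn = {"x": lambda s: s["v"],            # position changes with velocity
       "v": lambda s: -0.05 * s["v"]**2} # quadratic drag decelerates it
final = project_state({"x": 0.0, "v": 10.0}, dyn, duration=5.0)
```

An HTN planner with such a projector can evaluate method preconditions against the projected state after a wait, rather than assuming state variables stay fixed between actions.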
Goal-Driven Autonomy in a Navy Strategy Simulation
Molineaux, Matthew (Knexus Research Corporation) | Klenk, Matthew (Naval Research Laboratory) | Aha, David (Naval Research Laboratory)
Modern complex games and simulations pose many challenges for an intelligent agent, including partial observability, continuous time and effects, hostile opponents, and exogenous events. We present ARTUE (Autonomous Response to Unexpected Events), a domain-independent autonomous agent that dynamically reasons about what goals to pursue in response to unexpected circumstances in these types of environments. ARTUE integrates AI research in planning, environment monitoring, explanation, goal generation, and goal management. To explain our conceptualization of the problem ARTUE addresses, we present a new conceptual framework, goal-driven autonomy, for agents that reason about their goals. We evaluate ARTUE on scenarios in the TAO Sandbox, a Navy training simulation, and demonstrate its novel architecture, which includes components for Hierarchical Task Network planning, explanation, and goal management. Our evaluation shows that ARTUE can perform well in a complex environment and that each component is necessary and contributes to the performance of the integrated system.
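The goal-driven autonomy cycle described above (detect a discrepancy, explain it, generate goals, manage goals) can be shown schematically. This is a toy rendering of the conceptual framework, not ARTUE's implementation; the explanation step is stubbed and the priority rule is an arbitrary illustrative choice.

```python
def gda_cycle(observation, expected, goals):
    """One schematic goal-driven-autonomy cycle: compare observation to
    expectation, explain any discrepancy, generate a responsive goal, then
    manage the goal set and return the goal selected for planning."""
    if observation != expected:                        # discrepancy detection
        explanation = f"unexpected event: {observation}"  # explanation (stubbed)
        goals.append(("respond_to", observation))      # goal generation
    # Goal management: here, responsive goals simply outrank standing ones.
    goals.sort(key=lambda g: 0 if g[0] == "respond_to" else 1)
    return goals[0] if goals else None                 # goal passed to the planner

current = gda_cycle(observation="contact_detected",
                    expected="clear",
                    goals=[("patrol", "area_7")])
```

The point of the framework is visible even in this toy: an unexpected observation can change *which* goal the agent pursues, not just how it plans for a fixed goal.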
Case-Based Reasoning Integrations
Marling, Cynthia, Sqalli, Mohammed, Rissland, Edwina, Munoz-Avila, Hector, Aha, David
This article presents an overview and survey of current work in case-based reasoning (CBR) integrations. There has been a recent upsurge in the integration of CBR with other reasoning modalities and computing paradigms, especially rule-based reasoning (RBR) and constraint-satisfaction problem (CSP) solving. CBR integrations with model-based reasoning (MBR), genetic algorithms, and information retrieval are also discussed. This article characterizes the types of multimodal reasoning integrations where CBR can play a role, identifies the types of roles that CBR components can fulfill, and provides examples of integrated CBR systems. Past progress, current trends, and issues for future research are discussed.
AAAI 2000 Workshop Reports
Lesperance, Yves, Wagner, Gerd, Birmingham, William, Bollacker, Kurt, Nareyek, Alexander, Walser, J. Paul, Aha, David, Finin, Tim, Grosof, Benjamin, Japkowicz, Nathalie, Holte, Robert, Getoor, Lise, Gomes, Carla P., Hoos, Holger H., Schultz, Alan C., Kubat, Miroslav, Mitchell, Tom, Denzinger, Joerg, Gil, Yolanda, Myers, Karen, Bettini, Claudio, Montanari, Angelo
The AAAI-2000 Workshop Program was held Sunday and Monday, 30-31 July 2000 at the Hyatt Regency Austin and the Austin Convention Center in Austin, Texas. The 15 workshops held were (1) Agent-Oriented Information Systems, (2) Artificial Intelligence and Music, (3) Artificial Intelligence and Web Search, (4) Constraints and AI Planning, (5) Integration of AI and OR: Techniques for Combinatorial Optimization, (6) Intelligent Lessons Learned Systems, (7) Knowledge-Based Electronic Markets, (8) Learning from Imbalanced Data Sets, (9) Learning Statistical Models from Relational Data, (10) Leveraging Probability and Uncertainty in Computation, (11) Mobile Robotic Competition and Exhibition, (12) New Research Problems for Machine Learning, (13) Parallel and Distributed Search for Reasoning, (14) Representational Issues for Real-World Planning Systems, and (15) Spatial and Temporal Granularity.